
About Aiode
Aiode is a pioneering music technology platform designed to empower musicians through ethically trained, AI-powered tools. Inspired by Aoede, the Greek muse of song, Aiode brings together artists, producers, mathematicians, and AI experts who share a passion for music and innovation. The result is a dynamic ecosystem where creativity thrives. The platform introduces virtualized musicians: AI models trained on the unique styles of real performers. These digital collaborators adapt and evolve with each artist's input, opening wide-ranging possibilities for sound creation, and at every stage the real musicians remain active participants, choosing styles and instruments and helping shape their virtual counterparts. The system's modular architecture facilitates seamless integration with existing digital audio workstations (DAWs) and music production software, making it a versatile tool that fits smoothly into professional workflows. Musicians can export AI-generated tracks as MIDI files or fully rendered audio, then refine them further using their preferred instruments and effects. This interoperability ensures that Aiode's virtual musicians complement rather than replace traditional production methods, fostering a hybrid creative process that draws on both human ingenuity and machine efficiency.
Aiode was established with a clear mission: to fuel artistic creativity through the transformative power of artificial intelligence. Drawing inspiration from Aoede, the Greek muse of song and voice, Aiode blends a deep love for music with cutting-edge technology to develop groundbreaking tools that expand the horizons of artistic expression. Ethical considerations remain central to that mission, reflected in robust data protection policies and security protocols: advanced encryption protects user-generated content, and all data practices conform to international privacy standards. The platform deliberately limits data collection to what is essential for optimizing performance, honoring users' creative ownership and intellectual property rights. This transparent, respectful approach fosters an environment of trust in which innovation and user confidence can flourish. Aiode Beta 2.0 is open to everyone, offering free access for newcomers and advanced tools for creators who want to expand their sound; whether you're a hobbyist, bedroom producer, or professional composer, the platform provides everything you need to explore music without limits.
Black Friday Mega Sale
Aiode has officially kicked off its Black Friday Mega Sale, delivering unbeatable discounts of up to 59% on all subscription plans. This exclusive, once-a-year offer gives creators, musicians, and producers the chance to access Aiode’s most advanced AI-powered music tools at the lowest prices ever. Whether you're upgrading your plan or joining for the first time, this is the best moment to secure premium features at a fraction of the cost.
Limited-Time Countdown Deal
A live countdown timer is displayed on the website—showing days, hours, and minutes remaining—to highlight the urgency of the event. As soon as the timer hits zero, the special pricing and exclusive bonuses will expire. Customers are encouraged to claim their discounted plan before the offer disappears for good.
Annual Basic Plan – Now 50% Off
The Annual Basic Plan has been slashed from $120 to just $60 per year. This budget-friendly option introduces creators to Aiode’s ecosystem with essential features such as high-quality audio exports, full commercial rights, free access to future model updates, and 100 Muse Tokens. It’s the perfect starting point for new musicians who want professional tools at an affordable price.
Annual Pro Plan – Now 59% Off
The Annual Pro Plan offers the biggest savings, reduced from $240 to $100 per year. This plan includes 300 Muse Tokens, premium 48kHz STEM exports, commercial usage rights, and all upcoming AI model releases at no extra cost. It is the ideal choice for advanced creators, producers, and studios looking for maximum creative freedom and top-tier performance.
Monthly Plans – 20% Off for 12 Months
Both Basic and Pro monthly subscriptions are available at 20% off, with the discounted rate renewing automatically for a full year. This option is perfect for users who prefer a flexible month-to-month commitment while still benefiting from meaningful long-term savings.
Exclusive Promo Codes for Every Plan
Aiode offers dedicated promo codes that make it easy for users to unlock their Black Friday discounts at checkout. Use BFSALEBASIC50 for the Basic Plan, BFSALEPRO59 for the Pro Plan, and BLACKFRIDAY20 for all monthly subscriptions. Each code is tailored to its respective plan, ensuring a seamless and straightforward discount experience.
Free Access to All New Model Releases
Throughout the Black Friday promotion, all Aiode subscribers receive complimentary access to every new AI musician model released in November and December. These additions include fresh virtual performers with distinct tones and playing styles, allowing creators to stay ahead of new musical trends—without any added cost.
Studio-Quality High-Fidelity Exports
Aiode provides 24-bit, 48kHz stereo and STEM export options, ensuring creators work with professional-grade audio quality. These formats are fully compatible with industry-standard DAWs, enabling precise mixing, mastering, and further production with the clarity expected in top-tier studios.
Virtualized Musicians Technology
At the heart of Aiode’s innovation is its Virtualized Musicians system—advanced AI models crafted to emulate the style, nuance, and performance of real artists. Users can choose from various virtual musicians and collaborate with them in real time, creating expressive and lifelike music driven by AI intelligence.
A Streamlined Creative Workflow
Aiode simplifies the music-making experience through a guided five-step process:
Choose Your Musician
Let the AI Analyze Your Track
Shape and Direct the Performance
Generate and Fine-Tune the Output
Export Your Final Audio
This intuitive workflow mirrors the structure of a professional studio session, giving creators full control while maintaining a smooth, efficient production environment.
Ethical AI Development & Creative Data Protection
Aiode is committed to building AI responsibly. Through supervised machine learning and secure, siloed data systems, the platform ensures that artists’ original recordings remain protected at all times. No creative material is misused or accessed without consent, reflecting Aiode’s dedication to transparency, ethical practices, and full respect for artistic ownership.
Real Musicians at the Core of Every Model
Every virtual musician on Aiode is developed hand-in-hand with real artists. These musicians help shape their AI counterparts by choosing the genres, instruments, techniques, and stylistic nuances that define their sound. This collaborative approach preserves authenticity and ensures each model accurately reflects the artist’s true musical identity.
Revenue-Sharing Opportunities for Artists
Aiode empowers musicians by offering a built-in revenue-sharing program. As creators continue their real-world careers, their virtual versions can simultaneously earn income on the platform through user collaborations worldwide. This creates a sustainable, passive revenue stream while expanding global exposure for each artist.
A Global Hub for Discovery & Collaboration
Aiode brings together creators from around the world—including producers, songwriters, instrumentalists, composers, and audio engineers. The platform acts as a collaborative ecosystem where users can connect, share ideas, explore diverse musical styles, and create without geographical limitations.
Built by Musicians, Technologists & Innovators
Aiode was founded by a multidisciplinary team of artists, producers, AI specialists, and mathematicians united by a shared vision. Their mission is to evolve the future of music-making by merging human creativity with cutting-edge artificial intelligence.
Inspired by the Greek Muse Aoede
Aiode takes its name from Aoede, the ancient Greek muse of song and voice. This reflects the platform’s mission to ignite creativity and elevate artistic expression through advanced technology—while honoring the rich legacy and timeless spirit of music.
Recognized by Industry Leaders
Aiode’s innovative approach has earned praise from respected figures in the music world, including Andy Davies (former VP of Innovation at Universal Music Group) and acclaimed singer-songwriter Ivri Lider. Their support underscores Aiode’s commitment to ethical development and its potential to reshape the global music landscape.
Introducing the Aiode Desktop App
Aiode now offers a fully featured desktop application, giving creators the power to produce, shape performances, and export music directly from their computer. With AI-driven music generation, advanced control tools, and fast rendering, the app delivers a streamlined professional workflow suitable for both newcomers and seasoned producers.
Powered by Real Musical Expertise
Every Aiode model is crafted using recordings from highly skilled session musicians, all with more than 15 years of experience. This ensures that the performances generated by Aiode carry the richness, nuance, and authenticity of genuine, live musicianship.
Advancing Creativity Through AI
Aiode’s vision is to enhance—not replace—human creativity. By blending real artistic talent with intelligent technology, the platform empowers musicians to push creative boundaries, explore new ideas, and craft emotionally resonant music. Aiode represents a future where AI supports artists, fuels innovation, and celebrates the essence of human expression.
Users can explore the catalog of virtual musicians by instrument, genre, and unique traits. Examples include bassists, synth producers, trumpeters, and beat makers, each modeled on real-life musicians with whom users can virtually collaborate. These AI musicians generate musically compatible takes in response to the user's uploaded tracks, aligning with the song's emotional tone, tempo, and structure. The creative session can be refined through repeated takes, generating multiple performance options and variations to find the perfect fit for a project.
Strategic investors also support this innovation, seeing the long-term value of ethical, human-centered AI in the arts. The team at Aiode continues to grow, welcoming motion designers, developers, and creatives passionate about music and technology. The atmosphere within the company is collaborative and inclusive, and the end goal is always to enhance creativity, inspire artistic exploration, and make professional music production more accessible.
Moreover, Aiode places strong emphasis on artist empowerment and transparency. Musicians whose performances form the foundation of the AI models retain ownership and control over their digital likenesses.

In terms of accessibility, Aiode aims to democratize music creation by lowering barriers associated with traditional musicianship. Individuals who lack formal training or instrumental proficiency can engage deeply with music through intuitive interfaces and intelligent AI collaborators. This inclusivity opens avenues for self-expression and creative fulfillment to a broader audience, nurturing new talent and diversifying the musical landscape.
The collaboration with virtual musicians is modeled on real studio interactions. A selected musician, after listening to a track, interprets its tone and style and generates performances that complement it. Roles can be assigned for every song section (lead, rhythm, or improvisation), mirroring an organic musical conversation. A performance interface allows users to modify phrasing, intensity, and style, creating multiple takes to explore various moods or grooves. Virtual musicians can be combined or swapped freely, simulating everything from a live band to a full orchestral ensemble, all harmoniously synchronized.

Music and artificial intelligence (music and AI) is the development of music software programs that use AI to generate music.[1] As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn based on past data, such as in computer accompaniment technology, wherein the AI is capable of listening to a human performer and performing accompaniment.[2] Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. There are other AI applications in music that cover not only music composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control. Current research includes the application of AI in music composition, performance, theory and digital sound processing. Composers/artists like Jennifer Walshe or Holly Herndon have been exploring aspects of music AI for years in their performances and musical works. Another original approach of humans "imitating AI" can be found in the 43-hour sound installation String Quartet(s) by Georges Lentz.[3]
20th century art historian Erwin Panofsky proposed that in all art, there existed three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject.[4][5] AI music explores the foremost of these, creating music without the "intention" that is usually behind it, leaving composers who listen to machine-generated pieces feeling unsettled by the lack of apparent meaning.[6]
In the 1950s and 1960s, music made by artificial intelligence was not fully original but was generated from templates that people had already defined and given to the AI, an approach known as rule-based systems. As time passed, computers became more powerful, which allowed machine learning and artificial neural networks to aid the music industry: instead of predefined templates, AI could be given large amounts of data to learn how music is made. By the early 2000s, further advances had been made, with generative adversarial networks (GANs) and deep learning helping AI compose music that is more original, complex, and varied than was possible before. Notable AI-driven projects, such as OpenAI's MuseNet and Google's Magenta, have demonstrated AI's ability to generate compositions that mimic various musical styles.[7]
Artificial intelligence finds its beginnings in music with the transcription problem: accurately recording a performance into musical notation as it is played. Père Engramelle's schematic of a "piano roll", a mode of automatically recording note timing and duration in a way that could easily be transcribed into proper musical notation by hand, was first implemented by German engineers J.F. Unger and J. Hohlfeld in 1752.[8]
In 1957, the ILLIAC I (Illinois Automatic Computer) produced the "Illiac Suite for String Quartet", a completely computer-generated piece of music. The computer was programmed to accomplish this by composer Lejaren Hiller and mathematician Leonard Isaacson.[6]: v–vii In 1960, Russian researcher Rudolf Zaripov published the first paper worldwide on algorithmic music composition, using the Ural-1 computer.[9]
In 1965, inventor Ray Kurzweil developed software capable of recognizing musical patterns and synthesizing new compositions from them. The computer first appeared on the quiz show I've Got a Secret that same year.[10]
By 1983, Yamaha Corporation's Kansei Music System had gained momentum, and a paper was published on its development in 1989. The software utilized music information processing and artificial intelligence techniques to essentially solve the transcription problem for simpler melodies, although higher-level melodies and musical complexities are regarded even today as difficult deep-learning tasks, and near-perfect transcription is still a subject of research.[8][11]
In 1997, an artificial intelligence program named Experiments in Musical Intelligence (EMI) appeared to outperform a human composer at the task of composing a piece of music to imitate the style of Bach.[12] EMI would later become the basis for a more sophisticated algorithm called Emily Howell, named for its creator.
In 2002, the music research team at the Sony Computer Science Laboratory in Paris, led by French composer and scientist François Pachet, designed the Continuator, an algorithm uniquely capable of resuming a composition after a live musician stopped.[13]
Emily Howell would continue to make advancements in musical artificial intelligence, publishing her first album From Darkness, Light in 2009.[14] Since then, many more pieces by artificial intelligence and various groups have been published.
In 2010, Iamus became the first AI to produce a fragment of original contemporary classical music in its own style: "Iamus' Opus 1". Housed at the Universidad de Málaga (University of Málaga) in Spain, the computer can generate a fully original piece in a variety of musical styles.[15][6]: 468–481 In August 2019, a large dataset consisting of 12,197 MIDI songs, each with lyrics and melody,[16] was created to investigate the feasibility of neural melody generation from lyrics using a deep conditional LSTM-GAN method.
With progress in generative AI, models capable of creating complete musical compositions (including lyrics) from a simple text description have begun to emerge. Two notable web applications in this field are Suno AI, launched in December 2023, and Udio, which followed in April 2024.[17]
In November 2025, the AI-generated song "Walk My Walk", presented as being by Breaking Rust, topped the Billboard Country Digital Song Sales chart.[18] The same year, AI band The Velvet Sundown attracted one million listeners on Spotify.[19]
Streaming service Deezer has started tagging AI-generated songs and excluding them from its editorialized playlists. Its tool builds on previously published research on the nature of AI music's artefacts.[20] In November 2025, the service claimed that 50,000 AI-generated songs were uploaded daily, about a third of total uploads.[19]
Developed at Princeton University by Ge Wang and Perry Cook, ChucK is a text-based, cross-platform language.[21] By extracting and classifying the theoretical techniques it finds in musical pieces, the software is able to synthesize entirely new pieces from the techniques it has learned.[22] The technology is used by SLOrk (Stanford Laptop Orchestra)[23] and PLOrk (Princeton Laptop Orchestra).
Jukedeck was a website that let people use artificial intelligence to generate original, royalty-free music for use in videos.[24][25] The team started building the music generation technology in 2010,[26] formed a company around it in 2012,[27] and launched the website publicly in 2015.[25] The technology used was originally a rule-based algorithmic composition system,[28] which was later replaced with artificial neural networks.[24] The website was used to create over 1 million pieces of music, and brands that used it included Coca-Cola, Google, UKTV, and the Natural History Museum, London.[29] In 2019, the company was acquired by ByteDance.[30][31][32]
MorpheuS[33] is a research project by Dorien Herremans and Elaine Chew at Queen Mary University of London, funded by a Marie Skłodowska-Curie EU project. The system uses an optimization approach based on a variable neighborhood search algorithm to morph existing template pieces into novel pieces with a set level of tonal tension that changes dynamically throughout the piece. This optimization approach allows for the integration of a pattern detection technique in order to enforce long-term structure and recurring themes in the generated music. Pieces composed by MorpheuS have been performed at concerts in both Stanford and London.
Created in February 2016 in Luxembourg, AIVA is a program that produces soundtracks for any type of media. The algorithms behind AIVA are based on deep learning architectures.[34] AIVA has also been used to compose a rock track called "On the Edge",[35] as well as a pop tune, "Love Sick",[36] in collaboration with singer Taryn Southern,[37] for the creation of her 2018 album I am AI.
Google's Magenta team has published several AI music applications and technical papers since their launch in 2016.[38] In 2017 they released the NSynth algorithm and dataset,[39] and an open source hardware musical instrument, designed to facilitate musicians in using the algorithm.[40] The instrument was used by notable artists such as Grimes and YACHT in their albums.[41][42] In 2018, they released a piano improvisation app called Piano Genie. This was later followed by Magenta Studio, a suite of 5 MIDI plugins that allow music producers to elaborate on existing music in their DAW.[43] In 2023, their machine learning team published a technical paper on GitHub that described MusicLM, a private text-to-music generator which they'd developed.[44][45]
Riffusion is a neural network, designed by Seth Forsgren and Hayk Martiros, that generates music using images of sound rather than audio.[46]
The resulting music has been described as "de otro mundo" (otherworldly),[47] although unlikely to replace man-made music.[47] The model was made available on December 15, 2022, with the code also freely available on GitHub.[48]
The first version of Riffusion was created as a fine-tuning of Stable Diffusion, an existing open-source model for generating images from text prompts, on spectrograms,[46] resulting in a model which used text prompts to generate image files which could then be put through an inverse Fourier transform and converted into audio files.[48] While these files were only several seconds long, the model could also use latent space between outputs to interpolate different files together[46][49] (using the img2img capabilities of SD).[50] It was one of many models derived from Stable Diffusion.[50]
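As a rough illustration of the spectrogram-to-audio step (not Riffusion's actual code), a magnitude spectrogram whose phase has been discarded can be inverted with the Griffin-Lim algorithm; the sample rate, FFT size, and dB scaling below are assumptions chosen for the sketch.

```python
import numpy as np
import librosa
import soundfile as sf

# Illustrative sketch (not Riffusion's implementation): treat a grayscale
# spectrogram image as log-magnitude values and invert it back to audio.
SR = 44100          # sample rate assumed by this sketch
N_FFT = 2048
HOP = 512

def spectrogram_image_to_audio(img: np.ndarray) -> np.ndarray:
    """img: 2D float array in [0, 1]; rows = frequency bins, cols = frames."""
    # Map pixel brightness back to linear magnitude (80 dB range assumed).
    mag_db = img * 80.0 - 80.0
    mag = librosa.db_to_amplitude(mag_db)
    # Griffin-Lim iteratively estimates the phase the image discarded.
    return librosa.griffinlim(mag, n_iter=64, hop_length=HOP, n_fft=N_FFT)

# Example: a random "image" stands in for a model-generated spectrogram.
audio = spectrogram_image_to_audio(np.random.rand(N_FFT // 2 + 1, 256))
sf.write("riffusion_sketch.wav", audio, SR)
```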
In December 2022, Mubert[51] similarly used Stable Diffusion to turn descriptive text into music loops. In January 2023, Google published a paper on their own text-to-music generator called MusicLM.[52][53]
Forsgren and Martiros formed a startup, also called Riffusion, and raised $4 million in venture capital funding in October 2023.[54][55]
Spike AI is an AI-based audio plug-in, developed by Spike Stent in collaboration with his son Joshua Stent and friend Henry Ramsey, that analyzes tracks and provides suggestions for increasing clarity and improving other aspects of a mix. Communication is done through a chatbot trained on Spike Stent's personal data. The plug-in integrates into digital audio workstations.[56][57]
Artificial intelligence can potentially impact how producers create music by generating iterations of a track that follow a prompt given by the creator, allowing the AI to work toward the particular style the artist is after.[6] AI has also been applied to musical analysis, where it has been used for feature extraction, pattern recognition, and music recommendation.[58] New AI-powered tools have been built to aid in generating original music compositions, such as AIVA (Artificial Intelligence Virtual Artist) and Udio. These work by giving an AI model data on already-existing music and having it analyze that data with deep learning techniques to generate music in many different genres, such as classical or electronic music.[59] Music and choral classrooms are already implementing AI-driven programs to aid musical learning and the development of creative skills in students, where AI has been shown to provide a statistically significant improvement.[60][61]
Several musicians, including Dua Lipa, Elton John, Nick Cave, Paul McCartney, and Sting, have criticized the use of AI in music and are encouraging the UK government to act on the matter.[62][63][64][65][66] Another example of this protest is the silent 2025 album Is This What We Want?
Other artists, such as Grimes, have encouraged the use of AI in music.[67]
While helpful in generating new music, many issues have come up since artificial intelligence has begun making music. Some major concerns include how the economy will be impacted with AI taking over music production, who truly owns music generated by AI, and a lower demand for human-made musical compositions. Some critics argue that AI diminishes the value of human creativity, while proponents see it as an augmentative tool that expands artistic possibilities rather than replacing human musicians.[68][69]
Additionally, concerns have been raised about AI's potential to homogenize music. AI-driven models often generate compositions based on existing trends, which some fear could limit musical diversity. Addressing this concern, researchers are working on AI systems that incorporate more nuanced creative elements, allowing for greater stylistic variation.[59]
Another major concern about artificial intelligence in music is copyright law. In the United States, the current legal framework tends to apply traditional copyright law to AI, despite its differences from the human creative process.[70] However, music outputs generated solely by AI are not granted copyright protection. In the Compendium of U.S. Copyright Office Practices, the Copyright Office has stated that it would not grant copyrights to "works that lack human authorship" and that "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."[71] In February 2022, the Copyright Review Board rejected an application to copyright AI-generated artwork on the basis that it "lacked the required human authorship necessary to sustain a claim in copyright."[72] The usage of copyrighted music in training AI has also been a topic of contention. One instance of this was seen when SACEM, a professional organization of songwriters, composers, and music publishers, demanded that PozaLabs, an AI music generation startup, refrain from utilizing any music affiliated with them for training models.[73]
The situation in the European Union (EU) is similar to that in the US, because its legal framework also emphasizes the role of human involvement in a copyright-protected work.[74] According to the European Union Intellectual Property Office and the recent jurisprudence of the Court of Justice of the European Union, the originality criterion requires the work to be the author's own intellectual creation, reflecting the personality of the author as evidenced by the creative choices made during its production, which presupposes a distinct level of human involvement.[74] The reCreating Europe project, funded by the European Union's Horizon 2020 research and innovation program, examines the challenges posed by AI-generated content, including music, and argues for legal certainty and balanced protection that encourages innovation while respecting copyright norms.[74] The recognition of AIVA marks a significant departure from traditional views on authorship and copyright in music composition, allowing an AI artist to release music and earn royalties. This acceptance makes AIVA a pioneering instance of an AI being formally acknowledged within music production.[75]
Recent advances in artificial intelligence by groups such as Stability AI, OpenAI, and Google have drawn an enormous number of copyright claims against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their datasets restricted to the public domain.[76] Strides toward addressing ethical issues have been made as well, such as the collaboration between Sound Ethics (a company promoting ethical AI usage in the music industry) and UC Irvine, focusing on ethical frameworks and the responsible usage of AI.[77]
A more nascent development of AI in music is the application of audio deepfakes to cast the lyrics or musical style of a pre-existing song onto the voice or style of another artist. This has raised many concerns regarding the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity.[78] It has also raised the question of to whom the authorship of these works is attributed. As AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole.[79] Most recently, preventative measures have begun to be developed by Google and Universal Music Group, who have explored royalty payments and credited attribution as a way to allow producers to replicate the voices and styles of artists.[80]
In 2023, an artist known as ghostwriter977 created a musical deepfake called "Heart on My Sleeve" that cloned the voices of Drake and The Weeknd by feeding an assortment of vocal-only tracks from the respective artists into a deep-learning algorithm, creating an artificial model of each artist's voice; this model could then be mapped onto original reference vocals with original lyrics.[81] The track was submitted for Grammy consideration for best rap song and song of the year.[82] It went viral, gained traction on TikTok, and received a positive response from the audience, leading to its official release on Apple Music, Spotify, and YouTube in April 2023.[83] Many believed the track was fully composed by AI software, but the producer claimed the songwriting, production, and original (pre-conversion) vocals were still done by him.[81] It was later withdrawn from Grammy consideration because it did not follow the guidelines necessary to be considered for an award.[83] The track ended up being removed from all music platforms by Universal Music Group.[83] The song was a watershed moment for AI voice cloning, and models have since been created for hundreds, if not thousands, of popular singers and rappers.
In 2013, country music singer Randy Travis suffered a stroke which left him unable to sing. In the meantime, vocalist James Dupré toured on his behalf, singing his songs for him. Travis and longtime producer Kyle Lehning released a new song in May 2024 titled "Where That Came From", Travis's first new song since his stroke. The recording uses AI technology to re-create Travis's singing voice, having been composited from over 40 existing vocal recordings alongside those of Dupré.[84][85]
Since 2024, rapper Kanye West has been using artificial intelligence deepfakes of his own voice. His usage of deepfakes started during the production of his album Vultures 2, where the songs "Field Trip" and "Sky City" drew suspicion of artificial intelligence usage;[86] further updates to the songs "Forever"[86] and "530"[87] would also be accused of using AI. Artist Ty Dolla Sign, who released Vultures 2 with West, would confirm the allegations in 2025 during an interview.[88] West would subsequently confirm that his subsequent solo album, Bully, also used artificial intelligence,[89] and his later album In a Perfect World and the 2025 updated version of Donda 2 would also draw accusations of AI usage.[citation needed]
Playboi Carti has also been accused of using artificial intelligence deepfakes. The allegations followed the release of the song "Timeless", a collaboration with The Weeknd, where his verse was accused of using artificial intelligence.[90] The allegations would further intensify after the release of his album Music, where "Rather Lie" and "Fine Shit" were also accused of using artificial intelligence. Playboi Carti would deny the accusations.[91]
Artificial intelligence music encompasses a number of technical approaches used for music composition, analysis, classification, and recommendation. Techniques are drawn from deep learning, machine learning, natural language processing, and signal processing. Current systems can compose entire musical pieces, parse affective content, accompany human players in real time, and learn user- and context-dependent preferences.[92][93][94][95]
Symbolic music generation is the generation of music in discrete symbolic forms such as MIDI, where notes and timings are precisely defined. Early systems employed rule-based approaches and Markov models, but modern systems rely heavily on deep learning. Recurrent neural networks (RNNs), and more precisely long short-term memory (LSTM) networks, have been employed to model the temporal dependencies of musical sequences. They may be used to generate melodies, harmonies, and counterpoint in various musical genres.[96]
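As a rough illustration (not any cited system), a minimal next-note LSTM in Python might look like the following; the vocabulary, layer sizes, seed, and sampling loop are assumptions for the sketch, and the model would need training on real MIDI data before its output is musical.

```python
import torch
import torch.nn as nn

# Minimal sketch of LSTM-based symbolic music generation: the model
# predicts the next MIDI pitch (0-127) from the previous pitches.
VOCAB = 128  # MIDI pitch values

class NoteLSTM(nn.Module):
    def __init__(self, embed=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, embed)
        self.lstm = nn.LSTM(embed, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, VOCAB)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

def sample_melody(model, seed, length=64, temperature=1.0):
    """Autoregressively extend a seed sequence of MIDI pitches."""
    notes, state = list(seed), None
    x = torch.tensor([seed])
    for _ in range(length):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        notes.append(nxt)
        x = torch.tensor([[nxt]])
    return notes

model = NoteLSTM()  # untrained here; train with cross-entropy on real MIDI
print(sample_melody(model, seed=[60, 62, 64, 65]))  # C-major fragment seed
```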
Transformer models such as Music Transformer and MuseNet became more popular for symbolic generation due to their scalability and ability to model long-range dependencies. They have been employed to generate multi-instrument polyphonic music and stylistic imitations.[97]
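The decoder-only pattern behind such systems can be sketched as follows; this is a toy stand-in, not Music Transformer itself (which adds relative position encodings and far larger contexts), and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

# Sketch of a decoder-only transformer over note tokens: a causal mask
# restricts attention to past positions, giving next-token prediction.
VOCAB, D, T = 128, 128, 256  # illustrative vocabulary, width, max length

class NoteTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, D)
        self.pos = nn.Embedding(T, D)
        layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D, VOCAB)

    def forward(self, x):  # x: (batch, seq) of token ids
        seq = x.shape[1]
        h = self.tok(x) + self.pos(torch.arange(seq, device=x.device))
        causal = nn.Transformer.generate_square_subsequent_mask(seq)
        return self.head(self.blocks(h, mask=causal))  # next-token logits

logits = NoteTransformer()(torch.randint(0, VOCAB, (2, 64)))
print(logits.shape)  # torch.Size([2, 64, 128])
```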
Raw audio generation produces music directly as audio waveforms instead of symbolic notation. DeepMind's WaveNet is an early example that uses autoregressive sampling to generate high-fidelity audio. Generative adversarial networks (GANs) and variational autoencoders (VAEs) are increasingly used for audio texture synthesis and for combining the timbres of different instruments.[93]
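The core WaveNet idea, causal dilated convolutions whose receptive field grows exponentially with depth, can be sketched in a few lines; this toy version is an assumption-laden illustration, not DeepMind's implementation (which adds gated activations, skip connections, and conditioning).

```python
import torch
import torch.nn as nn

# Sketch of stacked causal, dilated 1-D convolutions: each output sample
# depends only on past samples, and dilation doubles per layer.
class CausalConv1d(nn.Conv1d):
    def __init__(self, ch_in, ch_out, dilation):
        super().__init__(ch_in, ch_out, kernel_size=2, dilation=dilation)
        self.pad = dilation  # left-pad so the layer never sees the future

    def forward(self, x):
        return super().forward(nn.functional.pad(x, (self.pad, 0)))

class TinyWaveNet(nn.Module):
    def __init__(self, channels=32, levels=8, classes=256):  # 256 = 8-bit mu-law
        super().__init__()
        self.inp = CausalConv1d(1, channels, dilation=1)
        self.stack = nn.ModuleList(
            CausalConv1d(channels, channels, dilation=2 ** i) for i in range(levels)
        )
        self.out = nn.Conv1d(channels, classes, kernel_size=1)

    def forward(self, x):  # x: (batch, 1, time)
        h = torch.tanh(self.inp(x))
        for layer in self.stack:
            h = h + torch.tanh(layer(h))  # residual connection
        return self.out(h)  # per-sample logits over 256 amplitude levels

logits = TinyWaveNet()(torch.randn(1, 1, 16000))  # one second at 16 kHz
print(logits.shape)  # torch.Size([1, 256, 16000])
```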
NSynth (Neural Synthesizer), a Google Magenta project, uses a WaveNet-like autoencoder to learn latent audio representations and thereby generate completely novel instrumental sounds.[98]
Music information retrieval (MIR) is the extraction of musically relevant information from audio recordings for use in applications such as genre classification, instrument recognition, mood recognition, beat detection, and similarity estimation. Convolutional neural networks (CNNs) operating on spectrogram features have proven highly accurate on these tasks.[95] Support vector machines (SVMs) and k-nearest neighbors (k-NN) are also used for classification on features such as Mel-frequency cepstral coefficients (MFCCs).
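A classic MFCC-plus-k-NN pipeline of the kind described above might look like this sketch; the file names and labels are placeholders, and a real system would use a much larger labeled dataset.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

# Sketch of a classic MIR pipeline: summarize each clip with mean MFCCs,
# then classify genre with k-NN.
def mfcc_features(path: str, sr: int = 22050) -> np.ndarray:
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # (20, frames)
    return mfcc.mean(axis=1)  # one 20-dim vector per clip

train_paths = ["clip_rock.wav", "clip_jazz.wav"]  # placeholder dataset
train_labels = ["rock", "jazz"]

X = np.stack([mfcc_features(p) for p in train_paths])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)
print(clf.predict([mfcc_features("unknown_clip.wav")]))
```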
Hybrid systems combine symbolic and audio-based methods to draw on their respective strengths: they can compose high-level symbolic structures and then synthesize them as natural-sounding audio. Real-time interactive systems allow an AI to respond instantaneously to human input in support of live performance. Reinforcement learning and rule-based agents tend to be utilized to enable human–AI co-creation in improvisation contexts.[94]
Affective computing techniques enable AI systems to classify or generate music according to its affective content. These models use musical features such as tempo, mode, and timbre to estimate or influence listener emotions. Deep learning models have been trained to classify music by affective content and even to create music intended to have specific emotional effects.[99]
Music recommenders employ AI to suggest tracks to users based on listening history, taste, and contextual information. Collaborative filtering, content-based filtering, and hybrid filtering are the most widely applied approaches, with deep learning used to refine predictions. Graph-based and matrix factorization methods are used within commercial systems like Spotify and YouTube Music to represent complex user-item relationships.[100]
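A bare-bones matrix factorization recommender can be sketched with NumPy alone; the synthetic play counts, latent dimension, and learning rate below are assumptions chosen for illustration, not parameters of any production system.

```python
import numpy as np

# Sketch of matrix factorization for recommendation: learn latent user and
# track vectors so their dot products approximate observed play counts,
# then rank unheard tracks by predicted score. Data here is synthetic.
rng = np.random.default_rng(0)
n_users, n_tracks, k = 50, 200, 8
plays = rng.poisson(1.0, (n_users, n_tracks)).astype(float)  # toy "ratings"
mask = plays > 0                                             # observed entries

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_tracks, k))

lr, reg = 0.01, 0.05
for _ in range(200):                 # simple gradient updates
    err = (U @ V.T - plays) * mask   # error only on observed entries
    U -= lr * (err @ V + reg * U)
    V -= lr * (err.T @ U + reg * V)

scores = U[0] @ V.T                  # predictions for user 0
scores[mask[0]] = -np.inf            # hide tracks already played
print("top recommendations:", np.argsort(scores)[-5:][::-1])
```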
AI is also used to automate audio engineering tasks such as mixing and mastering. Such systems set levels, equalize, pan, and compress audio to produce well-balanced output. Software such as LANDR and iZotope Ozone uses machine learning to emulate professional audio engineers' decisions.[101]
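As a toy illustration of one automated-mastering step, the sketch below gains a track toward a target RMS level with a crude peak ceiling; the target level and file names are assumptions, and commercial tools apply far more sophisticated learned processing.

```python
import numpy as np
import soundfile as sf

# Toy sketch of automated loudness normalization: scale toward a target
# RMS level, then hard-clip as a crude peak limiter.
def normalize(path_in: str, path_out: str, target_dbfs: float = -14.0):
    audio, sr = sf.read(path_in)
    rms = np.sqrt(np.mean(audio ** 2))
    gain = 10 ** (target_dbfs / 20) / max(rms, 1e-9)  # linear gain to target
    out = np.clip(audio * gain, -0.999, 0.999)        # crude peak ceiling
    sf.write(path_out, out, sr)

normalize("mix.wav", "mastered.wav")  # placeholder file names
```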
Natural language generation also applies to songwriting assistance and lyric generation. Transformer language models such as GPT-3 have been shown to generate stylistically coherent lyrics from input prompts, themes, or moods. There are also AI programs that assist with rhyme scheme, syllable count, and verse form.[102]
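A minimal lyric-continuation sketch using the Hugging Face transformers library follows, with GPT-2 as a freely available stand-in for the larger models named above; the prompt and sampling settings are arbitrary.

```python
from transformers import pipeline

# Sketch of lyric continuation with an off-the-shelf language model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Verse 1:\nNeon rain on an empty street,\n"
out = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.9)
print(out[0]["generated_text"])
```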
Recent developments include multimodal AI systems that integrate music with other media, e.g., dance, video, and text. These can generate background scores in synchronization with video sequences or generate dance choreography from audio input. Cross-modal retrieval systems allow one to search for music using images, text, or gestures.[103]